7 research outputs found

    Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift

    In this paper we revisit the method of off-policy corrections for reinforcement learning (COP-TD) pioneered by Hallak et al. (2017). Under this method, online updates to the value function are reweighted to avoid divergence issues typical of off-policy learning. While Hallak et al.'s solution is appealing, it cannot easily be transferred to nonlinear function approximation. First, it requires a projection step onto the probability simplex; second, even though the operator describing the expected behavior of the off-policy learning algorithm is convergent, it is not known to be a contraction mapping, and hence, may be more unstable in practice. We address these two issues by introducing a discount factor into COP-TD. We analyze the behavior of discounted COP-TD and find it better behaved from a theoretical perspective. We also propose an alternative soft normalization penalty that can be minimized online and obviates the need for an explicit projection step. We complement our analysis with an empirical evaluation of the two techniques in an off-policy setting on the game Pong from the Atari domain, where we find discounted COP-TD to be better behaved in practice than the soft normalization penalty. Finally, we perform a more extensive evaluation of discounted COP-TD in 5 games of the Atari domain, where we find performance gains for our approach.
    Comment: AAAI 201
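    The abstract's core idea — learning a covariate-shift ratio with a TD-style update whose target is softened by a discount factor — can be illustrated with a minimal tabular sketch. This is not the authors' code: the function name, the tabular setting, and the specific mixing of the importance-weighted target with the constant 1 are assumptions made for illustration, based on the abstract's description of discounting the COP-TD target to obtain a better-behaved operator.

    ```python
    import numpy as np

    def discounted_cop_td_update(c, s, a, s_next, pi, mu, alpha=0.1, gamma_hat=0.99):
        """One hypothetical tabular update of the covariate-shift ratio c.

        c:       (n_states,) current estimate of the ratio d_pi / d_mu
        pi, mu:  (n_states, n_actions) target and behavior policies
        The target blends the importance-weighted previous ratio with the
        constant 1, controlled by the discount gamma_hat; intuitively, the
        blending pulls the operator toward a contraction.
        """
        rho = pi[s, a] / mu[s, a]  # per-step importance ratio pi(a|s)/mu(a|s)
        target = gamma_hat * rho * c[s] + (1.0 - gamma_hat) * 1.0
        c[s_next] += alpha * (target - c[s_next])  # TD-style move toward target
        return c
    ```

    With gamma_hat = 0, the target collapses to 1 (no correction); with gamma_hat close to 1, it approaches the undiscounted COP-TD update the abstract says may fail to be a contraction.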

    Evolución y estructura de la Policia de la Generalitat – Mossos d’Esquadra

    A descriptive document on the evolution and current structure of the Policia de la Generalitat – Mossos d’Esquadra (PG-ME), the police force of Catalonia, Spain. It was prepared from an institutional perspective, drawing on official documentary sources and historical research. After a brief review of the origins of the PG-ME, emphasis is placed on its territorial deployment in the last quarter of the twentieth century, through which it became the main police force in Catalonia; the document also explains the distinctiveness of the Catalan policing model, its organizational structure, functions, and current challenges. It serves as a reference for understanding fundamental aspects of this police institution.

    Neuroevolution of Self-Interpretable Agents

    Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning (RL) tasks, allowing us to incorporate modules with discrete, non-differentiable operations that are useful for our agent. We argue that self-attention has similar properties to indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, thus enabling our agent to solve challenging vision-based tasks with at least 1000x fewer parameters than existing methods. Since our agent attends only to task-critical visual hints, it is able to generalize to environments where task-irrelevant elements are modified, while conventional methods fail. Videos of our results and source code are available at https://attentionagent.github.io/
    Comment: To appear at the Genetic and Evolutionary Computation Conference (GECCO 2020) as a full paper
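    The indirect-encoding claim — an (n × n) attention matrix induced by small key/query projections — can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: the function name, the patch representation, and the use of column-summed attention as a per-patch importance score are assumptions; the actual agent uses evolved parameters and a specific patch-extraction pipeline described in the paper.

    ```python
    import numpy as np

    def attention_patch_selection(patches, W_k, W_q, top_k=5):
        """Rank image patches by self-attention importance (hypothetical sketch).

        patches: (n, d) flattened image patches
        W_k, W_q: (d, d_small) key/query projections -- a small number of
                  parameters that induce a large implicit (n, n) attention matrix
        Returns the indices of the top_k most-attended patches.
        """
        keys = patches @ W_k
        queries = patches @ W_q
        scores = queries @ keys.T / np.sqrt(W_k.shape[1])  # (n, n) logits
        # row-wise softmax, then sum each column: total attention a patch receives
        exp = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn = exp / exp.sum(axis=1, keepdims=True)
        importance = attn.sum(axis=0)
        return np.argsort(importance)[::-1][:top_k]
    ```

    The downstream policy would see only the selected patch locations, which is what makes the agent's behavior directly interpretable in pixel space; because the ranking involves a non-differentiable argsort/top-k step, gradient-free neuroevolution is a natural fit for training W_k and W_q.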